
    Road Segmentation in SAR Satellite Images with Deep Fully-Convolutional Neural Networks

    Remote sensing is extensively used in cartography. As transportation networks grow and change, extracting roads automatically from satellite images is crucial to keep maps up-to-date. Synthetic Aperture Radar satellites can provide high-resolution topographical maps. However, roads are difficult to identify in these data as they look visually similar to targets such as rivers and railways. Most road extraction methods for Synthetic Aperture Radar images still rely on a prior segmentation performed by classical computer vision algorithms. Few works study the potential of deep learning techniques, despite their successful application to optical imagery. This letter presents an evaluation of Fully-Convolutional Neural Networks for road segmentation in SAR images. We study the relative performance of early and state-of-the-art networks after carefully enhancing their sensitivity towards thin objects by adding spatial tolerance rules. Our models show promising results, successfully extracting most of the roads in our test dataset. This shows that, although Fully-Convolutional Neural Networks natively lack efficiency for road segmentation, they are capable of good results if properly tuned. As the segmentation quality does not scale well with the increasing depth of the networks, the design of specialized architectures for road extraction should yield better performance. Comment: 5 pages, accepted for publication in IEEE Geoscience and Remote Sensing Letters
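The spatial tolerance rules mentioned in this abstract can be illustrated with a relaxed evaluation metric: a predicted road pixel counts as correct if a ground-truth road pixel lies within a small distance rho. The function below is a minimal, hypothetical sketch (the paper's exact tolerance rule is not given here); the Chebyshev neighbourhood and the name `relaxed_precision` are assumptions.

```python
def relaxed_precision(pred, truth, rho=1):
    """pred, truth: 2D lists of 0/1 road masks.
    A predicted road pixel is a hit if any ground-truth road pixel
    lies within `rho` pixels (Chebyshev distance)."""
    h, w = len(truth), len(truth[0])
    hits = total = 0
    for y in range(h):
        for x in range(w):
            if not pred[y][x]:
                continue
            total += 1
            near = any(
                truth[yy][xx]
                for yy in range(max(0, y - rho), min(h, y + rho + 1))
                for xx in range(max(0, x - rho), min(w, x + rho + 1))
            )
            hits += near
    return hits / total if total else 1.0
```

With rho=1, a thin predicted road offset by one pixel from the ground truth still scores full precision, which is exactly the kind of tolerance thin structures need under strict pixel-wise metrics.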

    Saving Lives from Above: Person Detection in Disaster Response Using Deep Neural Networks

    This paper focuses on person detection in aerial and drone imagery, which is crucial for various operations such as situational awareness, search and rescue, and safe delivery of supplies. We aim to improve disaster response efforts by enhancing the speed, safety, and effectiveness of the process. Therefore, we introduce a new person detection dataset comprising 311 annotated aerial and drone images, acquired from helicopters and drones in different scenes, including urban and rural areas, and for different scenarios, such as estimation of damage in disaster-affected zones, and search and rescue operations in different countries. The amount of data considered and the level of detail of the annotations resulted in a total of 10,050 annotated persons. To detect people in aerial and drone images, we propose a multi-stage training procedure to improve YOLOv3's ability. The proposed procedure aims to address challenges such as variations in scenes, scenarios, and people's poses, as well as image scales and viewing angles. To evaluate the effectiveness of our proposed training procedure, we split our dataset into a training and a test set. The latter includes images acquired during real search and rescue exercises and operations, and is therefore representative of the challenges encountered during operational missions and suitable for an accurate assessment of the proposed models. Experimental results demonstrate the effectiveness of our proposed training procedure, as the model's average precision exhibits a notable increase over the baseline value.
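Average precision, the metric evaluated above, is obtained by ranking detections by confidence and integrating the precision-recall curve. The following is a minimal sketch of the all-point-interpolated variant; the function name and input layout are illustrative assumptions, not the paper's code.

```python
def average_precision(scored, num_gt):
    """scored: list of (confidence, is_true_positive) per detection.
    num_gt: number of ground-truth persons.
    Returns AP as the area under the interpolated precision-recall curve."""
    scored = sorted(scored, key=lambda s: -s[0])  # rank by confidence
    tp = fp = 0
    points = []  # (recall, precision) after each detection
    for _, is_tp in scored:
        tp += is_tp
        fp += not is_tp
        points.append((tp / num_gt, tp / (tp + fp)))
    # all-point interpolation: precision at recall r is the max precision
    # achieved at any recall >= r
    ap, prev_r = 0.0, 0.0
    for i, (r, _) in enumerate(points):
        p = max(pt[1] for pt in points[i:])
        ap += (r - prev_r) * p
        prev_r = r
    return ap
```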

    Exploiting Deep Matching and SAR Data for the Geo-Localization Accuracy Improvement of Optical Satellite Images

    Improving the geo-localization of optical satellite images is an important pre-processing step for many remote sensing tasks like scene monitoring over time or scene analysis after sudden events. These tasks often require the fusion of geo-referenced and precisely co-registered multi-sensor data. Images captured by high resolution synthetic aperture radar (SAR) satellites have an absolute geo-location accuracy within a few decimeters. This renders SAR images interesting as a source for the geo-location improvement of optical images, whose geo-location accuracy is in the range of several meters. In this paper, we investigate a deep learning based approach for the geo-localization accuracy improvement of optical satellite images through SAR reference data. Image registration between SAR and optical satellite images requires few but accurate and reliable matching points. To derive such matching points, a neural network based on a Siamese network architecture was trained to learn the two-dimensional spatial shift between optical and SAR image patches. The neural network was trained on TerraSAR-X and PRISM image pairs covering greater urban areas spread over Europe. The results of the proposed method confirm that accurate and reliable matching points are generated with a higher matching accuracy and precision than state-of-the-art approaches.
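Once a network has predicted a shift per patch pair, a global geolocation correction still has to be derived from few but reliable points. One simple way to do this, shown below purely as an illustration (it is not necessarily the paper's aggregation strategy), is a coordinate-wise median over the predicted shifts, which suppresses outlier matches.

```python
from statistics import median

def robust_shift(shifts):
    """shifts: list of (dx, dy) shifts predicted per optical/SAR patch pair.
    A coordinate-wise median is robust to a minority of bad matches."""
    return (median(dx for dx, _ in shifts), median(dy for _, dy in shifts))

def apply_shift(footprint, shift):
    """Shift an image footprint (list of (x, y) corner coordinates)
    by the estimated (dx, dy) correction."""
    dx, dy = shift
    return [(x + dx, y + dy) for x, y in footprint]
```

In the example below, one grossly wrong match (10.0, -5.0) barely affects the estimate, whereas a mean would be pulled far off.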

    Road condition assessment from aerial imagery using deep learning

    Terrestrial sensors are commonly used to inspect and document the condition of roads at regular intervals and according to defined rules. In Germany, for example, extensive data and information are obtained, stored in the Federal Road Information System, and made available in particular for deriving necessary decisions. Transverse and longitudinal evenness, for example, are recorded by vehicles using laser techniques. To detect damage to the road surface, images are captured and recorded using area or line scan cameras. All these methods provide very accurate information about the condition of the road, but are time-consuming and costly. Aerial imagery (e.g. multi- or hyperspectral, SAR) provides an additional possibility for acquiring the specific parameters describing the condition of roads, yet a direct transfer from objects extractable from aerial imagery to the required objects or parameters that determine the condition of the road is difficult and in some cases impossible. In this work, we investigate the transferability of objects commonly used for the terrestrial-based assessment of road surfaces to an aerial image-based assessment. In addition, we generated a suitable dataset and developed a deep learning based image segmentation method capable of extracting two relevant road condition parameters from high-resolution multispectral aerial imagery, namely cracks and working seams. The obtained results show that our models are able to extract these thin features from aerial images, indicating the possibility of using more automated approaches for road surface condition assessment in the future.
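Cracks and working seams are thin, low-contrast structures, and the paper extracts them with a learned segmentation model. The core visual cue can nevertheless be sketched with a crude local-contrast rule, shown here purely as an illustrative stand-in for the deep model: a pixel is a crack candidate if it is markedly darker than its neighbourhood.

```python
def crack_candidates(img, win=2, margin=30):
    """img: 2D list of grayscale values.
    A pixel is flagged when it is at least `margin` darker than the mean
    of its (2*win+1)^2 neighbourhood -- a crude, hypothetical stand-in
    for the learned segmentation in the paper."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ys = range(max(0, y - win), min(h, y + win + 1))
            xs = range(max(0, x - win), min(w, x + win + 1))
            vals = [img[yy][xx] for yy in ys for xx in xs]
            if sum(vals) / len(vals) - img[y][x] >= margin:
                out[y][x] = 1
    return out
```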

    Exploring the Potential of Conditional Adversarial Networks for Optical and SAR Image Matching

    Tasks such as the monitoring of natural disasters or the detection of change highly benefit from complementary information about an area or a specific object of interest. The required information is provided by fusing highly accurate, co-registered and geo-referenced datasets. Aligned high resolution optical and synthetic aperture radar (SAR) data additionally enable an absolute geo-location accuracy improvement of the optical images by extracting accurate and reliable ground control points (GCPs) from the SAR images. In this paper we investigate the applicability of a deep learning based matching concept for the generation of precise and accurate GCPs from SAR satellite images by matching optical and SAR images. To this end, conditional generative adversarial networks (cGANs) are trained to generate SAR-like image patches from optical images. For training and testing, optical and SAR image patches are extracted from TerraSAR-X and PRISM image pairs covering greater urban areas spread over Europe. The artificially generated patches are then used to improve the conditions for three known matching approaches based on normalized cross-correlation (NCC), SIFT and BRISK, which are normally not usable for the matching of optical and SAR images. The results validate that NCC-, SIFT- and BRISK-based matching greatly benefits, in terms of matching accuracy and precision, from the use of the artificial templates. The comparison with two state-of-the-art optical and SAR matching approaches shows the potential of the proposed method but also reveals some challenges and the necessity for further developments.
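Of the three matching approaches named above, normalized cross-correlation is the simplest to sketch: the (here, artificially generated SAR-like) template is slid over the search image and the offset with the highest correlation wins. A minimal pure-Python version; the patch layout and function names are assumptions for illustration.

```python
from math import sqrt

def ncc(a, b):
    """Normalized cross-correlation of two equal-size 2D patches (lists)."""
    va = [v for row in a for v in row]
    vb = [v for row in b for v in row]
    ma, mb = sum(va) / len(va), sum(vb) / len(vb)
    num = sum((x - ma) * (y - mb) for x, y in zip(va, vb))
    den = sqrt(sum((x - ma) ** 2 for x in va) * sum((y - mb) ** 2 for y in vb))
    return num / den if den else 0.0

def match(template, search):
    """Slide template over search image; return (best_dy, best_dx, score)."""
    th, tw = len(template), len(template[0])
    best = (-2.0, 0, 0)  # (score, dy, dx); NCC is always >= -1
    for dy in range(len(search) - th + 1):
        for dx in range(len(search[0]) - tw + 1):
            window = [row[dx:dx + tw] for row in search[dy:dy + th]]
            s = ncc(template, window)
            if s > best[0]:
                best = (s, dy, dx)
    return best[1], best[2], best[0]
```

NCC's sensitivity to the radiometric gap between optical and SAR intensities is precisely why it normally fails on this sensor pair, and why matching against a SAR-like translated template helps.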

    SAR-to-Optical Image Translation Based on Conditional Generative Adversarial Networks - Optimization, Opportunities and Limits

    Due to its ability to acquire images at any time, synthetic aperture radar (SAR) remote sensing plays an important role in Earth observation. The ability to interpret the data is limited, even for experts, as the human eye is not accustomed to the impact of distance-dependent imaging, signal intensities detected in the radar spectrum, or image characteristics related to speckle and post-processing steps. This paper is concerned with machine learning for SAR-to-optical image-to-image translation in order to support the interpretation and analysis of the original data. A conditional adversarial network is adopted and optimized in order to generate alternative SAR image representations, based on the combination of SAR images (starting point) and optical images (reference) for training. Following this strategy, the focus is set on the value of empirical knowledge for initialization, the impact of results on follow-up applications, and the discussion of opportunities and drawbacks related to this application of deep learning. Case study results are shown for high resolution (SAR: TerraSAR-X, optical: ALOS PRISM) and low resolution (Sentinel-1 and -2) data. The properties of the alternative image representation are evaluated based on feedback from experts in SAR remote sensing and on the impact on road extraction as an example of follow-up applications. The results provide the basis to explain fundamental limitations affecting the SAR-to-optical image translation idea, but also indicate benefits from alternative SAR image representations.
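The conditional adversarial setup referenced here follows, in spirit, the pix2pix-style objective: an adversarial term combined with an L1 reconstruction term. The formulation below is the standard one, stated as background rather than taken from the paper; here x is the SAR input and y the corresponding optical reference, and the weighting λ is a tunable assumption.

```latex
G^{*} = \arg\min_{G}\max_{D}\; \mathcal{L}_{cGAN}(G, D) + \lambda\,\mathcal{L}_{L1}(G),
\qquad
\mathcal{L}_{cGAN}(G, D) = \mathbb{E}_{x,y}\!\left[\log D(x, y)\right]
  + \mathbb{E}_{x}\!\left[\log\!\left(1 - D(x, G(x))\right)\right],
\qquad
\mathcal{L}_{L1}(G) = \mathbb{E}_{x,y}\!\left[\lVert y - G(x)\rVert_{1}\right].
```

The L1 term keeps the translated image close to the reference in structure, while the adversarial term pushes it toward the appearance statistics of real optical imagery.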

    Detecting and Estimating On-street Parking Areas from Aerial Images

    Parking is an essential part of transportation systems and urban planning, but the availability of data on parking is limited, which poses problems, for example when estimating search times for parking spaces in travel demand models. This paper presents an on-street parking area prediction model developed using remote sensing and open geospatial data of the German city of Brunswick. Neural networks are used to segment the aerial images into parking and street areas. To enhance the robustness of this detection, multiple predictions over the same regions are fused. We enrich this information with publicly available data and formulate a Bayesian inference model to predict the parking area per street meter. The model is estimated and validated using detected parking areas from the aerial images. We find that the prediction accuracy of the parking area model at mid to high levels of parking area per street meter is good, but at lower levels uncertainty increases. Using a Bayesian inference model allows the uncertainty of the prediction to be passed on to subsequent applications to track error propagation. Since only open source data serve as input for the prediction model, a transfer to structurally similar regions, for which no aerial images are available, is possible. The model can be used in a wide range of applications like travel demand models, parking regulation and urban planning.
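The Bayesian inference step can be illustrated with the simplest conjugate case: a normal prior on parking area per street metre, updated with noisy detections of known variance. This is a didactic stand-in rather than the paper's actual model; all names and values are invented for the sketch.

```python
def posterior_normal(prior_mean, prior_var, obs, obs_var):
    """Conjugate normal update: combine a prior belief about parking area
    per street metre with detections of known noise variance.
    Returns (posterior_mean, posterior_var)."""
    post_var = 1.0 / (1.0 / prior_var + len(obs) / obs_var)
    post_mean = post_var * (prior_mean / prior_var + sum(obs) / obs_var)
    return post_mean, post_var
```

More detections shrink the posterior variance, so the uncertainty handed to downstream applications (e.g. travel demand models) directly reflects how much evidence was available, matching the error-propagation idea in the abstract.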

    Parking space inventory from above: Detection on aerial images and estimation for unobserved regions

    Parking is a vital component of today's transportation system and descriptive data are therefore of great importance for urban planning and traffic management. However, data quality is often low: managed parking places may only be partially inventoried, or parking at the curbside and on private ground may be missing. This paper presents a processing chain in which remote sensing data and statistical methods are combined to provide parking area estimates. First, parking spaces and other traffic areas are detected from aerial imagery using a convolutional neural network. Individual image segmentations are fused to increase completeness. Next, a Gamma hurdle model is estimated using the detected parking areas together with OpenStreetMap and land use data to predict the parking area adjacent to streets. We find a systematic relationship between the road length and type and the parking area obtained. We suggest that our results are informative to those needing information on parking in structurally similar regions.

    Ad-hoc situational awareness during floods using remote sensing data and machine learning methods

    Recent advances in machine learning and the rise of new large-scale remote sensing datasets have opened new possibilities for automating remote sensing data analysis, making it possible to cope with the growing data volume and complexity and with the inherent spatio-temporal dynamics of disaster situations. In this work, we provide insights into machine learning methods developed by the German Aerospace Center (DLR) for rapid mapping activities and used to support disaster response efforts during the 2021 flood in Western Germany. These specifically include methods for systematic flood monitoring from Sentinel-1, as well as road-network extraction, object detection and damage assessment from very high-resolution optical satellite and aerial images. We discuss aspects of data acquisition and present results that were used by first responders during the flood disaster.
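Systematic flood monitoring from Sentinel-1 commonly exploits the fact that smooth open water backscatters little radar energy and therefore appears dark. The sketch below is deliberately simplified (the operational DLR processing chain is far more involved, and the -16 dB threshold is an assumed placeholder, since usable thresholds depend on the sensor mode and scene):

```python
def flood_mask(backscatter_db, threshold_db=-16.0):
    """backscatter_db: 2D list of Sentinel-1 backscatter values in dB.
    Pixels darker than the threshold are flagged as water (1)."""
    return [[1 if v < threshold_db else 0 for v in row] for row in backscatter_db]
```

In practice such a per-pixel rule is only a starting point; terrain shadow, smooth tarmac and wind-roughened water all require additional evidence before a pixel is reported as flooded.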